Feedforward Learning of Mixture Models
Abstract
We develop a biologically plausible learning rule that provably converges to the class means of general mixture models. This rule generalizes the classical BCM neural rule within a tensor framework, substantially increasing the generality of the learning problem it solves. It achieves this by incorporating triplets of samples from the mixtures, which provides a novel information-processing interpretation of spike-timing-dependent plasticity (STDP). We provide both proofs of convergence and a close fit to experimental data on STDP.
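The abstract's starting point is the classical BCM rule, in which a synaptic weight vector is driven by the product of pre- and postsynaptic activity gated by a sliding threshold. The sketch below simulates the standard (pairwise) Intrator-Cooper form of BCM on a toy two-component Gaussian mixture; the data, learning rate, and threshold time constant are illustrative assumptions, and the paper's triplet-based tensor generalization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-component Gaussian mixture in 2D (illustrative only;
# the means and noise level are assumptions, not from the paper).
means = np.array([[3.0, 0.0], [0.0, 3.0]])
X = np.concatenate([m + 0.3 * rng.standard_normal((500, 2)) for m in means])
rng.shuffle(X)

def bcm_update(w, x, theta, eta=5e-3, tau=0.99):
    """One step of the classical BCM rule with a sliding threshold:
    dw = eta * y * (y - theta) * x, where theta tracks a running
    average of y**2 (the Intrator-Cooper formulation)."""
    y = float(w @ x)                       # postsynaptic response
    w = w + eta * y * (y - theta) * x      # Hebbian/anti-Hebbian update
    theta = tau * theta + (1.0 - tau) * y ** 2  # slide the threshold
    return w, theta

w = 0.1 * rng.standard_normal(2)
theta = 1.0
for _ in range(20):          # repeated passes over the shuffled samples
    for x in X:
        w, theta = bcm_update(w, x, theta)

# A trained BCM neuron typically becomes selective: it responds strongly
# to one mixture component and weakly to the other.
print("responses to the two component means:", means @ w)
print("sliding threshold:", theta)
```

The quadratic term `y * (y - theta)` is what makes the rule selective rather than merely Hebbian: responses below the sliding threshold are depressed and responses above it are potentiated, which is the single-neuron behavior the paper extends, via sample triplets, to recovering the class means of general mixture models.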
Similar Resources
Active Learning with Statistical Models
For many types of machine learning algorithms, one can compute the statistically "optimal" way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally we...
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforwa...
Deep neural networks for acoustic modeling in speech recognition
Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feedforward neural network that takes several frames of coef...
An Introduction to Learning Structured Information
By and large, connectionist models have been successfully employed for solving learning tasks characterized by relatively poor data types. For example, feedforward neural networks and probabilistic mixture models can only deal with static data types such as records or fixed-size numerical arrays. Sequences are the first significant improvement over static data. There are two important new issue...
Learning Mixture Models for Classification with Energy Combination
In this article, we propose a technique called Energy Mixture Model (emm) for classification. emm is a type of feed-forward neural network that can be used to decide the number of nodes for constructing the hidden layer of neural networks based on the variable clustering method. Additionally, energy combination method is used to generate the recognition pattern as the basis for classification. ...
Publication date: 2014